    MultiVeStA: Statistical Model Checking for Discrete Event Simulators

    The modeling, analysis and performance evaluation of large-scale systems are difficult tasks. Due to the size and complexity of the considered systems, an approach typically followed by engineers consists in simulating system models to obtain statistical estimations of quantitative properties. Similarly, a technique used by computer scientists working on quantitative analysis is Statistical Model Checking (SMC), where rigorous mathematical languages (typically logics) are used to express system properties of interest. Such properties can then be automatically estimated by tools that perform simulations of the model at hand. These property specification languages, often not popular among engineers, provide a formal, compact and elegant way to express system properties without hard-coding them in the model definition. This paper presents MultiVeStA, a statistical analysis tool which can be easily integrated with existing discrete event simulators, enriching them with efficient distributed statistical analysis and SMC capabilities.
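
    As a rough illustration of the kind of analysis such a tool automates (the function names and stopping rule below are assumptions made for this sketch, not MultiVeStA's actual API), simulation-based estimation keeps launching independent simulations until the confidence interval around the estimated property is narrow enough:

```python
# Minimal sketch (not MultiVeStA's actual API) of sequential mean estimation
# as used in simulation-based SMC: keep running simulations until the
# (1 - alpha) confidence interval of the estimated property is narrow enough.
import math
import random


def estimate_property(run_simulation, alpha=0.05, delta=0.1, min_runs=30):
    """run_simulation() is a hypothetical callable returning one numeric
    observation of the property (e.g. 0/1 for a boolean query)."""
    z = 1.96 if alpha == 0.05 else 2.58          # normal quantile, rough table
    values = [run_simulation() for _ in range(min_runs)]
    while True:
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / (n - 1)
        half_width = z * math.sqrt(var / n)      # confidence-interval half-width
        if half_width <= delta / 2:              # interval narrower than delta: stop
            return mean, half_width
        values.append(run_simulation())          # otherwise simulate more


# Toy usage: estimate the probability that a simulated value exceeds 0.8.
print(estimate_property(lambda: 1.0 if random.random() > 0.8 else 0.0))
```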

    Statistical analysis of chemical computational systems with MULTIVESTA and ALCHEMIST

    The chemical-oriented approach is an emerging paradigm for programming the behaviour of densely distributed and context-aware devices (e.g. in ecosystems of displays tailored to crowd steering, or to obtain profile-based coordinated visualization). Typically, the evolution of such systems cannot be easily predicted, which makes the availability of techniques and tools supporting prior-to-deployment analysis of paramount importance. Exact analysis techniques do not scale well as the complexity of systems grows; as a consequence, approximated techniques based on simulation have assumed a relevant role. This work presents a new simulation-based distributed tool addressing the statistical analysis of this kind of system, obtained by chaining two existing tools: MultiVeStA and Alchemist. The former is a recently proposed lightweight tool which allows existing discrete event simulators to be enriched with distributed statistical analysis capabilities, while the latter is an efficient simulator for chemical-oriented computational systems. The tool is validated against a crowd steering scenario, and insights on performance are provided by discussing how it scales when the analysis tasks are distributed over a multi-core architecture.
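
    Conceptually, chaining the two tools amounts to putting the simulator behind a small stepping interface that the statistical analyzer can drive: reset with a fresh seed, advance one event at a time, and read out observables. The class and method names below are illustrative assumptions, not the actual integration API:

```python
# Conceptual sketch of a simulator adapter that a statistical analyzer can
# drive: reset with a seed, advance one discrete event at a time, and read
# out observables. Class, method and observable names are assumptions.
import random


class SimulatorAdapter:
    def new_simulation(self, seed):
        """Reset the underlying simulator for a fresh, independently seeded run."""
        self.rng = random.Random(seed)
        self.time = 0.0
        self.crowd_density = 1.0          # toy observable

    def one_step(self):
        """Advance the simulation by one discrete event."""
        self.time += self.rng.expovariate(1.0)
        self.crowd_density *= 0.99

    def observe(self, name):
        """Return the current value of a named observable."""
        return {"time": self.time, "crowd_density": self.crowd_density}[name]


# The analyzer repeatedly runs fresh simulations and queries observables.
sim = SimulatorAdapter()
sim.new_simulation(seed=42)
while sim.observe("time") < 10.0:
    sim.one_step()
print(sim.observe("crowd_density"))
```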

    Where Is the Power? Transnational Networks, Authority and the Dispute over the Xayaburi Dam on the Lower Mekong Mainstream

    Accounts of hydro-hegemony and counter hydro-hegemony provide state-based conceptions of power in international river basins. However, authority should be seen as transnationalized, as small states develop coping strategies to augment their authority over decision-making processes. The article engages Rosenau’s spheres of authority concept to argue that hydro-hegemony is exercised by actors embedded in spheres of authority that reshape actor configurations as they emerge. These spheres consist of complex networks challenging customary notions of the local-global dichotomy and hydro-hegemony. Hydro-hegemony is therefore not fixed. The article examines these processes by analysing the dispute over the Xayaburi Dam in the Mekong Basin.

    A cooperative approach for distributed task execution in autonomic clouds

    Virtualization and distributed computing are two key pillars that guarantee scalability of applications deployed in the Cloud. In Autonomous Cooperative Cloud-based Platforms, autonomous computing nodes cooperate to offer a PaaS Cloud for the deployment of user applications. Each node must allocate the necessary resources for customer applications to be executed with certain QoS guarantees. If the QoS of an application cannot be guaranteed, a node has mainly two options: to allocate more resources (if possible) or to rely on the collaboration of other nodes. Making a decision is not trivial since it involves many factors (e.g. the cost of setting up virtual machines, migrating applications, discovering collaborators). In this paper we present a model of such scenarios and experimental results validating the convenience of cooperative strategies over selfish ones, where nodes do not help each other. We describe the architecture of the platform of autonomous clouds and the main features of the model, which has been implemented and evaluated in the DEUS discrete-event simulator. From the experimental evaluation, based on workload data from the Google Cloud Backend, we conclude that (modulo our assumptions and simplifications) the performance of a volunteer cloud is comparable to that of a Google Cluster.
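
    As an illustration only (the cost figures and helper names are hypothetical placeholders, not the paper's model), the per-node decision described above can be sketched as follows:

```python
# Illustrative sketch of the per-node decision: run locally if there is spare
# capacity, otherwise compare the cost of allocating more local resources
# against the cost of relying on a collaborator. All values are hypothetical.
from dataclasses import dataclass


@dataclass
class Node:
    free_cpu: float          # locally available CPU share
    vm_startup_cost: float   # cost of spinning up one more VM
    remote_cost: float       # estimated cost of delegating to a collaborator


def place_task(node: Node, task_cpu: float) -> str:
    """Decide whether to run a task locally, start a new VM, or delegate."""
    if task_cpu <= node.free_cpu:
        return "run locally"
    # Not enough spare capacity: allocate more resources or cooperate.
    if node.vm_startup_cost <= node.remote_cost:
        return "allocate a new local VM"
    return "delegate to a collaborating node"


print(place_task(Node(free_cpu=0.2, vm_startup_cost=3.0, remote_cost=1.5), task_cpu=0.5))
```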

    A Computational Field Framework for Collaborative Task Execution in Volunteer Clouds

    The increasing diffusion of cloud technologies is opening new opportunities for distributed and collaborative computing. Volunteer clouds are a prominent example, where participants join and leave the platform and collaborate by sharing their computational resources. The high dynamism and unpredictability of such scenarios call for decentralized self-* approaches to guarantee QoS. We present a simulation framework for collaborative task execution in volunteer clouds and propose one concrete instance based on Ant Colony Optimization, which is validated through a set of simulation experiments based on Google workload data.
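
    A minimal sketch of the ant-colony idea behind such an instance, with assumed parameter names and update rules (not the framework's actual ones): pheromone values bias where tasks are sent and are reinforced when an execution succeeds.

```python
# Illustrative ant-colony-style task placement: each node keeps a pheromone
# score per neighbour, chooses a target probabilistically, and reinforces
# the pheromone when the task meets its QoS. Parameters are hypothetical.
import random

pheromone = {"nodeA": 1.0, "nodeB": 1.0, "nodeC": 1.0}
EVAPORATION = 0.1    # fraction of pheromone lost each round
REWARD = 0.5         # deposit added after a successful execution


def choose_node():
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    acc = 0.0
    for node, tau in pheromone.items():
        acc += tau
        if r <= acc:
            return node
    return node  # fallback for floating-point edge cases


def update(node, success):
    for n in pheromone:
        pheromone[n] *= (1 - EVAPORATION)       # evaporation on all edges
    if success:
        pheromone[node] += REWARD               # reinforce the good choice


target = choose_node()
update(target, success=True)
print(pheromone)
```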

    Reputation-based Cooperation in the Clouds

    The popularity of the cloud computing paradigm is opening new opportunities for collaborative computing. In this paper we tackle a fundamental problem in open-ended cloud-based distributed computing platforms, i.e., the quest for potential collaborators. We assume that cloud participants are willing to share their computational resources for shared distributed computing problems, but are not willing to disclose the details of their resources. Lacking such information, we advocate relying on reputation scores obtained by evaluating the interactions among participants. More specifically, we propose a methodology to assess, at design time, the impact of different (reputation-based) collaborator selection strategies on the system performance. The evaluation is performed through statistical analysis on a volunteer cloud simulator.
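
    One way to picture reputation-based selection (the weighting factor and the "best reputation first" rule are assumptions for illustration, not the paper's exact strategies) is to keep per-node scores as a moving average of past interaction outcomes:

```python
# Illustrative reputation bookkeeping: scores are an exponential moving
# average of observed interaction outcomes (1 = task met its QoS, 0 = it
# did not). Weighting factor and selection rule are assumptions.
reputation = {}          # node id -> score in [0, 1]
ALPHA = 0.2              # weight given to the most recent interaction


def record_interaction(node, success):
    old = reputation.get(node, 0.5)             # unknown nodes start neutral
    reputation[node] = (1 - ALPHA) * old + ALPHA * (1.0 if success else 0.0)


def pick_collaborator(candidates):
    # "Best reputation first" is just one of the selection strategies one
    # could compare at design time.
    return max(candidates, key=lambda n: reputation.get(n, 0.5))


record_interaction("n1", True)
record_interaction("n2", False)
print(pick_collaborator(["n1", "n2", "n3"]))
```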

    A Holistic Approach for Collaborative Workload Execution in Volunteer Clouds

    The demand for provisioning, using, and maintaining distributed computational resources is growing hand in hand with the quest for ubiquitous services. Centralized infrastructures such as cloud computing systems provide suitable solutions for many applications, but their scalability can be limited in some scenarios, such as latency-dependent applications. The volunteer cloud paradigm aims at overcoming this limitation by encouraging clients to offer their own spare, perhaps unused, computational resources. Volunteer clouds are thus complex, large-scale, dynamic systems that demand self-adaptive capabilities to offer effective services, as well as modeling and analysis techniques to predict their behavior. In this article, we propose a novel holistic approach for volunteer clouds supporting collaborative task execution services able to improve the quality of service of compute-intensive workloads. We instantiate our approach by extending a recently proposed ant colony optimization algorithm for distributed task execution with a workload-based partitioning of the overlay network of the volunteer cloud. Finally, we evaluate our approach using simulation-based statistical analysis techniques on a workload benchmark provided by Google. Our results show that the proposed approach outperforms some traditional distributed task scheduling algorithms in the presence of compute-intensive workloads.
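
    A rough sketch of what a workload-based partitioning of the overlay could look like, under an assumed partitioning criterion (spare CPU) chosen purely for illustration:

```python
# Rough sketch of workload-based partitioning: nodes are split into overlay
# partitions according to their spare CPU, and compute-intensive tasks are
# only routed within the "high capacity" partition. The threshold and
# partition names are illustrative assumptions.
def partition_overlay(nodes, cpu_threshold=0.5):
    """nodes: dict mapping node id -> spare CPU fraction."""
    high = [n for n, cpu in nodes.items() if cpu >= cpu_threshold]
    low = [n for n, cpu in nodes.items() if cpu < cpu_threshold]
    return {"compute_intensive": high, "regular": low}


def route(task_is_heavy, partitions):
    """Pick the candidate nodes for a task based on its workload class."""
    return partitions["compute_intensive"] if task_is_heavy else partitions["regular"]


parts = partition_overlay({"n1": 0.8, "n2": 0.3, "n3": 0.6})
print(route(True, parts))
```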

    Enriching volunteer clouds with self-* capabilities

    Provisioning, using and maintaining computational resources as services is a hard challenge. On the one hand, there is an increasing demand for such services due to the growing role of software in our society; on the other hand, the amount and variety of computational resources keep growing due to the pervasiveness of computational devices in our lives. The complexity of this problem can only be mastered by resorting to suitable technologies based on well-studied paradigms. Three prominent examples and ICT trends of the last decade are (i) cloud computing, which promotes the idea of computational resources as services; (ii) autonomic computing, which aims at minimizing the amount of human intervention and automating many aspects of a system’s life-cycle; and (iii) volunteer computing, which promotes the idea of achieving complex tasks by fostering collaboration among peers. This thesis proposes an approach based on the combination of the above-mentioned paradigms (i)–(iii) for the design and evaluation of volunteer cloud platforms providing a service for executing simple tasks. The major problem under consideration is the selection of the mechanisms used by cloud participants to collaborate in providing such a service. The main contributions of the thesis are: (1) an architecture and a model for volunteer cloud platforms; (2) a discrete event simulator for this model; (3) the extension of a statistical analysis tool to ease the analysis; (4) novel self-* strategies for collaboration among volunteers, mainly inspired by multi-agent systems and AI techniques, evaluated with the simulator using the Google Backend workload.